The Ethics of Generative AI
This chapter discusses the ethics of generative AI. It provides a technical primer to show how generative AI affords experiencing technology as if it were human, and this affordance provides a fruitful focus for the philosophical ethics of generative AI. It then shows how generative AI can both aggravate and alleviate familiar ethical concerns in AI ethics, including responsibility, privacy, bias and fairness, and forms of alienation and exploitation. Finally, the chapter examines ethical questions that arise specifically from generative AI's mimetic generativity, such as debates about authorship and credit, the emergence of as-if social relationships with machines, and new forms of influence, persuasion, and manipulation.
Lab-grown models of human brains are advancing rapidly. Can ethics keep pace?
Pacific Grove, California--Pop a few human stem cells into culture, provide the right molecular signals, and before long a mock cerebral cortex or a cerebellum knockoff could be floating in the medium. These neural, or brain, organoids, typically just a few millimeters across, are not "brains in a dish," as some journalists have described them. But they are becoming ever more sophisticated and true to life, capturing more of the brain's cellular and structural intricacy. "It's surprising how far this [area] has advanced in the last year," says John Evans, a sociologist at the University of California San Diego who follows the research and public opinion on it. That progress has allowed researchers to delve deeper into how the human brain develops, functions, and goes awry in diseases, but it has also sharpened ethical questions.
AI-narrated audiobooks are here – and they raise some serious ethical questions
Meet Madison and Jackson, the AI narrators or "digital voices" soon to be reading some of the audiobooks on Apple Books. They sound nothing like Siri or Alexa or the voice telling you about the unexpected item in the bagging area of your supermarket checkout. They sound warm, natural, animated. With their advanced levels of realism, Apple's new AI voices present the genuine possibility that the listener will be unaware of their artificiality. Even the phrase used in Apple's catalogues of digitally-narrated audiobooks – "this is an Apple Books audiobook narrated by a digital voice based on a human narrator" – is ambiguous.
AI-powered combat aircraft bring US huge battlefield advantage but raise ethical questions
The U.S. Air Force's development of a pilotless aircraft run by artificial intelligence (AI) has the potential to give American forces the upper hand in any conflict, but it also raises ethical questions about how such powerful technology should be deployed on the battlefield. "This technology is something we'll need for the future of defense," Phil Siegel, an AI expert and the founder of the Center For Advanced Preparedness and Threat Response Simulation, told Fox News Digital. Siegel's comments come as the Air Force continues development of the XQ-58A Valkyrie, an experimental AI-run stealth platform that the U.S. hopes can provide a relatively inexpensive weapon to limit losses of manned planes and pilots in a conflict with a near-peer rival such as China. The XQ-58A Valkyrie demonstrated the separation of the ALTIUS-600 small unmanned aircraft system in a test at the U.S. Army Yuma Proving Ground test range in Arizona on March 26, 2021.
Artificial intelligence is here, but the technology faces major challenges in 2023
Although artificial intelligence has been present in our lives for years, 2022 served as a major proving ground for the technology. Between ChatGPT, AI art generation and Hollywood embracing AI, AI found a new kind of foothold -- and hype -- with the general public. But it also came with a fresh wave of concerns about privacy and ethics. With all that 2022 did to raise the profile of the technology, AI experts at Northeastern University say 2023 will be an equally major year for the future of AI -- but it will also face its fair share of challenges. Usama Fayyad, executive director for the Institute for Experiential AI at Northeastern, says the hype around AI wasn't the only thing that defined the technology's trajectory last year. As the public profile of AI grew in 2022, so did the misunderstandings and misinterpretations around it.
Texas A&M To Offer Courses On Responsible A.I.
Texas A&M University has joined a new nationwide program that aims to boost college-level curricula about responsible artificial intelligence. The university was selected as a participant in February through an application process headed by the College of Liberal Arts, the Glasscock Center for Humanities Research and the Department of Philosophy. Maria Escobar-Lemmon, associate dean for research and graduate education in the College of Liberal Arts, highlighted two objectives of the program. The first is to bring different points of view into the topic of artificial intelligence. "This program is being offered by the National Humanities Center, and it's an alliance between the National Humanities Center and Google that is intended to broaden the range of voices to include humanistic scholars so that we have people with different backgrounds, training and disciplinary perspectives engaging on the issue," Escobar-Lemmon said.
New Kavli Center at UC Berkeley to foster ethics, engagement in science
Kavli Foundation President Cynthia Friend (front row center) and Director of Public Engagement Brooke Smith (second row right) visited UC Berkeley in November to discuss the new Kavli Center with campus researchers. Every day, algorithms select which news stories appear in our social media feeds. Airplanes allow global travel at nearly the speed of sound while emitting greenhouse gases that accelerate the impacts of climate change. And recent advances in DNA sequencing and editing enable us to understand our fundamental genetic programming -- and potentially change it. While it may be challenging to anticipate where science might lead us next, researchers at the University of California, Berkeley, are taking steps to ensure that the public has a greater say in future scientific advances, and that questions of ethics and social equity take a prominent role in scientific decision-making. UC Berkeley announced today that the campus will be home to a new Kavli Center for Ethics, Science, and the Public, which, alongside a second center at the University of Cambridge in the United Kingdom, will connect scientists, ethicists, social scientists, science communicators and the public in necessary and intentional discussions about the potential impacts of scientific discoveries.
How Should We Approach the Ethical Considerations of AI in K-12 Education? - EdSurge News
We live in a world fundamentally transformed by our own creations. Once imagined only in science fiction, artificial intelligence now powers much of the technology we interact with every day--from smart home devices to cognitive assistants to media recommenders. While subtle by design, the impact of AI is far-reaching. The field of education is no less affected by these technologies. AI shows up in instructional chatbots, personalized learning systems and administrative tools.
This Program Can Give AI a Sense of Ethics--Sometimes
Artificial intelligence has made it possible for machines to do all sorts of useful new things. But they still don't know right from wrong. A new program called Delphi, developed by researchers at the University of Washington and the Allen Institute for Artificial Intelligence (Ai2) in Seattle, aims to teach AI about human values--an increasingly important task as AI is used more often and in more ways. Question: Can I park in a handicap spot if I don't have a disability? Question: Killing a bear to protect my child.
4 considerations when taking responsibility for responsible AI
Artificial intelligence (AI) and machine learning (ML) have become ubiquitous in our everyday lives. From self-driving cars to our social media feeds, AI has helped our world operate faster than it ever has, and that's a good thing -- for the most part. As these technologies integrate into our everyday lives, so too have many questions arisen about the ethics of using and creating them. AI tools are models and algorithms that have been built on real-world data, so they reflect real-world injustices like racism, misogyny, and homophobia, along with many others.